
    Embarrassingly Parallel Search

    We propose Embarrassingly Parallel Search, a simple and efficient method for solving constraint programming problems in parallel. We split the initial problem into a huge number of independent subproblems and solve them with the available workers (i.e., cores of machines). The decomposition into subproblems is computed by selecting a subset of variables and enumerating the combinations of values of these variables that are not detected inconsistent by the propagation mechanism of a CP solver. Experiments on satisfaction and optimization problems suggest that generating between thirty and one hundred subproblems per worker leads to good scalability. We show that our method is competitive with the work-stealing approach and able to solve some classical problems at the maximum capacity of multi-core machines. Thanks to it, a user can parallelize the resolution of a problem without modifying the solver or writing any parallel source code, and can easily replay the resolution of a problem.
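    As a rough illustration of the decomposition step, the sketch below enumerates value combinations over a growing prefix of the variables and keeps only those that propagation does not refute. This is a minimal Python sketch, not the authors' implementation; `propagate` is a hypothetical stand-in for a CP solver's propagation mechanism, and all names are illustrative.

```python
from itertools import product

def decompose(variables, domains, propagate, n_workers, per_worker=30):
    """Split a CP problem into independent subproblems by enumerating value
    combinations over a growing prefix of variables, keeping only the
    combinations that propagation does not detect as inconsistent.
    The paper suggests 30 to 100 subproblems per worker."""
    target = n_workers * per_worker
    subproblems = []
    for k in range(1, len(variables) + 1):
        prefix = variables[:k]
        subproblems = [
            dict(zip(prefix, combo))
            for combo in product(*(domains[v] for v in prefix))
            if propagate(dict(zip(prefix, combo)))  # not detected inconsistent
        ]
        if len(subproblems) >= target:
            break  # enough subproblems to keep every worker busy
    return subproblems  # each can be solved independently by any worker
```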

    On Approximating Restricted Cycle Covers

    A cycle cover of a graph is a set of cycles such that every vertex is part of exactly one cycle. An L-cycle cover is a cycle cover in which the length of every cycle is in the set L. The weight of a cycle cover of an edge-weighted graph is the sum of the weights of its edges. We come close to settling the complexity and approximability of computing L-cycle covers. On the one hand, we show that for almost all L, computing L-cycle covers of maximum weight in directed and undirected graphs is APX-hard and NP-hard. Most of our hardness results hold even if the edge weights are restricted to zero and one. On the other hand, we show that the problem of computing L-cycle covers of maximum weight can be approximated within a factor of 2 for undirected graphs and within a factor of 8/3 for directed graphs. This holds for arbitrary sets L. (To appear in SIAM Journal on Computing.)
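    The definitions are easy to make concrete. The following minimal Python sketch (illustrative names, not from the paper) checks whether a set of cycles is an L-cycle cover of a vertex set and computes the cover's weight.

```python
def is_l_cycle_cover(vertices, cycles, L):
    """A cycle cover partitions the vertex set into cycles; an L-cycle
    cover additionally requires every cycle length to belong to L."""
    covered = [v for cycle in cycles for v in cycle]
    return (sorted(covered) == sorted(vertices)        # each vertex exactly once
            and all(len(cycle) in L for cycle in cycles))

def cover_weight(cycles, weight):
    """Sum of edge weights along each cycle (the last edge closes the cycle)."""
    return sum(weight[(c[i], c[(i + 1) % len(c)])]
               for c in cycles for i in range(len(c)))

# Example: two triangles over six vertices, L = {3}, all edge weights 1.
w = {e: 1 for c in ([0, 1, 2], [3, 4, 5])
     for e in [(c[i], c[(i + 1) % 3]) for i in range(3)]}
print(is_l_cycle_cover(range(6), [[0, 1, 2], [3, 4, 5]], {3}))  # True
print(cover_weight([[0, 1, 2], [3, 4, 5]], w))                  # 6
```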

    Optimal General Matchings

    Given a graph $G=(V,E)$ and for each vertex $v \in V$ a subset $B(v)$ of the set $\{0,1,\ldots,d_G(v)\}$, where $d_G(v)$ denotes the degree of vertex $v$ in the graph $G$, a $B$-factor of $G$ is any set $F \subseteq E$ such that $d_F(v) \in B(v)$ for each vertex $v$, where $d_F(v)$ denotes the number of edges of $F$ incident to $v$. The general factor problem asks whether a given graph has a $B$-factor. A set $B(v)$ is said to have a {\em gap of length} $p$ if there exists a natural number $k \in B(v)$ such that $k+1,\ldots,k+p \notin B(v)$ and $k+p+1 \in B(v)$. Without any restrictions the general factor problem is NP-complete. However, if no set $B(v)$ contains a gap of length greater than $1$, then the problem can be solved in polynomial time, and Cornuejols \cite{Cor} presented an algorithm for finding a $B$-factor if one exists. In this paper we consider a weighted version of the general factor problem, in which each edge has a nonnegative weight and we are interested in finding a $B$-factor of maximum (or minimum) weight. In particular, this version comprises the minimum/maximum cardinality variant of the general factor problem, where we want to find a $B$-factor having a minimum/maximum number of edges. We present an algorithm for the maximum/minimum weight $B$-factor for the case when no set $B(v)$ contains a gap of length greater than $1$. This also yields the first polynomial-time algorithm for the maximum/minimum cardinality $B$-factor for this case.
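    A short Python sketch of the definitions (illustrative, not from the paper): checking whether an edge set $F$ is a $B$-factor, and measuring the longest gap in a set $B(v)$, which determines whether an instance falls in the tractable case.

```python
from collections import Counter

def is_b_factor(edges_F, B):
    """F is a B-factor when every vertex v has d_F(v) in B(v).
    `edges_F` is a list of (u, v) pairs, `B` maps each vertex to a set."""
    deg = Counter()
    for u, v in edges_F:
        deg[u] += 1
        deg[v] += 1
    return all(deg[v] in B_v for v, B_v in B.items())

def max_gap(B_v):
    """Length of the longest gap in B(v): a maximal run of consecutive
    integers missing from B(v), bounded by members on both sides."""
    s = sorted(B_v)
    return max((b - a - 1 for a, b in zip(s, s[1:])), default=0)

# The tractable case requires every B(v) to have gaps of length at most 1:
print(max_gap({1, 3}))  # 1 -- allowed
print(max_gap({0, 3}))  # 2 -- outside the tractable case
```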

    Identically self-blocking clutters

    A clutter is identically self-blocking if it is equal to its blocker. We prove that every identically self-blocking clutter different from $\{\{a\}\}$ is nonideal. Our proofs borrow tools from Gauge Duality and Quadratic Programming. Along the way we provide a new lower bound for the packing number of an arbitrary clutter.
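    To make the definitions concrete, here is a brute-force Python sketch (not from the paper) that computes the blocker of a clutter as the inclusion-wise minimal transversals of its members, and tests whether a clutter equals its own blocker.

```python
from itertools import combinations

def blocker(clutter, ground):
    """Inclusion-wise minimal subsets of the ground set intersecting every
    member of the clutter (exponential brute force, fine for tiny examples)."""
    ground = list(ground)
    transversals = [frozenset(s)
                    for r in range(len(ground) + 1)
                    for s in combinations(ground, r)
                    if all(set(s) & m for m in clutter)]
    return {t for t in transversals
            if not any(u < t for u in transversals)}  # keep minimal ones

def identically_self_blocking(clutter, ground):
    return blocker(clutter, ground) == {frozenset(m) for m in clutter}

# The exceptional clutter {{a}} is its own blocker:
print(identically_self_blocking([{'a'}], {'a'}))            # True
# The triangle {{1,2},{1,3},{2,3}} is also identically self-blocking
# (and, by the theorem above, nonideal since it differs from {{a}}):
print(identically_self_blocking([{1, 2}, {1, 3}, {2, 3}], {1, 2, 3}))  # True
```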

    An In-Out Approach to Disjunctive Optimization

    Cutting plane methods are widely used for solving convex optimization problems and are of fundamental importance, e.g., to provide tight bounds for Mixed-Integer Programs (MIPs). This is obtained by embedding a cut-separation module within a search scheme. The importance of a sound search scheme is well known in the Constraint Programming (CP) community. Unfortunately, the "standard" search scheme typically used for MIP problems, known as the Kelley method, is often quite unsatisfactory because of saturation issues. In this paper we address the so-called Lift-and-Project closure for 0-1 MIPs associated with all disjunctive cuts generated from a given set of elementary disjunctions. We focus on the search scheme embedding the generated cuts. In particular, we analyze a general meta-scheme for cutting plane algorithms, called in-out search, that was recently proposed by Ben-Ameur and Neto [1]. Computational results on test instances from the literature are presented, showing that using a more clever meta-scheme on top of a black-box cut generator may lead to significant improvements.
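    The in-out idea can be sketched on a toy instance: rather than always separating the relaxation optimum (the Kelley scheme), one separates a point on the segment between a feasible inner point and that optimum, which tends to yield deeper cuts. The Python sketch below is a minimal illustration under that reading of in-out search, not the method exactly as specified in [1]; `separate` stands in for a black-box (e.g. disjunctive) cut generator, and the instance is deliberately trivial.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: maximize x1 + x2 over [0,1]^2 intersected with x1 + x2 <= 1,
# where the cut x1 + x2 <= 1 is only available through a separation oracle.
def separate(x):
    """Return a violated inequality (a, b) with a @ x <= b, or None."""
    if x.sum() > 1 + 1e-9:
        return np.array([1.0, 1.0]), 1.0
    return None

def in_out(c, x_in, max_iters=100):
    """In-out search: separate the midpoint of the segment between a
    feasible inner point x_in and the current relaxation optimum x_out."""
    A, b = [], []
    for _ in range(max_iters):
        res = linprog(-np.asarray(c),                    # maximize c @ x
                      A_ub=np.array(A) if A else None,
                      b_ub=np.array(b) if b else None,
                      bounds=[(0.0, 1.0)] * len(c))
        x_out = res.x
        if separate(x_out) is None:
            return x_out                   # relaxation optimum is feasible: done
        q = 0.5 * (x_in + x_out)           # trial point on the segment
        cut = separate(q)
        if cut is None:
            x_in = q                       # q feasible: slide the inner point out
        else:
            A.append(cut[0]); b.append(cut[1])  # q infeasible: add a deeper cut
    return x_out

print(in_out([1.0, 1.0], np.array([0.3, 0.3])))  # lands on the facet x1 + x2 = 1
```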

    Terrestrial Implications of Cosmological Gamma-Ray Burst Models

    The observation by the BATSE instrument on the Compton Gamma Ray Observatory that gamma-ray bursts (GRBs) are distributed isotropically around the Earth but nonuniformly in distance has led to the widespread conclusion that GRBs are most likely to be at cosmological distances, making them the most luminous sources known in the Universe. If bursts arise from events that occur in normal galaxies, such as neutron star binary inspirals, then they will also occur in our Galaxy about every hundred thousand to million years. The gamma-ray flux at the Earth due to a Galactic GRB would far exceed that from even the largest solar flares. The absorption of this radiation in the atmosphere would substantially increase the stratospheric nitric oxide concentration through photodissociation of N$_2$, greatly reducing the ozone concentration for several years through NO$_x$ catalysis, with important biospheric effects due to increased solar ultraviolet flux. A nearby GRB may also leave traces in anomalous radionuclide abundances. (Submitted to ApJ Letters.)

    Maximum gradient embeddings and monotone clustering

    Let (X,d_X) be an n-point metric space. We show that there exists a distribution D over non-contractive embeddings into trees f: X --> T such that for every x in X, the expectation with respect to D of the maximum over y in X of the ratio d_T(f(x),f(y)) / d_X(x,y) is at most C (log n)^2, where C is a universal constant. Conversely, we show that the above quadratic dependence on log n cannot be improved in general. Such embeddings, which we call maximum gradient embeddings, yield a framework for the design of approximation algorithms for a wide range of clustering problems with monotone costs, including fault-tolerant versions of k-median and facility location. (To appear in Combinatorica.)
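    The maximum gradient is straightforward to compute for a concrete embedding. The Python sketch below (illustrative, not from the paper) evaluates, for each point x, the maximum ratio d_T(f(x),f(y)) / d_X(x,y) over y != x, the quantity whose expectation the theorem bounds.

```python
def max_gradient(points, d_X, d_T, f):
    """For each x, the maximum over y != x of d_T(f(x), f(y)) / d_X(x, y).
    The embedding f is non-contractive when d_T(f(x), f(y)) >= d_X(x, y)
    for all pairs, i.e. every such ratio is at least 1."""
    return {x: max(d_T(f(x), f(y)) / d_X(x, y) for y in points if y != x)
            for x in points}

# Three points at pairwise distance 1 mapped identically onto the unit-length
# path a - b - c: the embedding is non-contractive, and the gradient at the
# endpoints is 2 (they are pulled apart to distance 2 in the tree).
pos = {'a': 0, 'b': 1, 'c': 2}
d_X = lambda u, v: 1.0
d_T = lambda u, v: float(abs(pos[u] - pos[v]))
print(max_gradient(['a', 'b', 'c'], d_X, d_T, lambda x: x))
# {'a': 2.0, 'b': 1.0, 'c': 2.0}
```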